ai-enabled system
Greening AI-enabled Systems with Software Engineering: A Research Agenda for Environmentally Sustainable AI Practices
Cruz, Luís, Fernandes, João Paulo, Kirkeby, Maja H., Martínez-Fernández, Silverio, Sallou, June, Anwar, Hina, Roque, Enrique Barba, Bogner, Justus, Castaño, Joel, Castor, Fernando, Chasmawala, Aadil, Cunha, Simão, Feitosa, Daniel, González, Alexandra, Jedlitschka, Andreas, Lago, Patricia, Muccini, Henry, Oprescu, Ana, Rani, Pooja, Saraiva, João, Sarro, Federica, Selvan, Raghavendra, Vaidhyanathan, Karthik, Verdecchia, Roberto, Yamshchikov, Ivan P.
The environmental impact of Artificial Intelligence (AI)-enabled systems is increasing rapidly, and software engineering plays a critical role in developing sustainable solutions. The "Greening AI with Software Engineering" CECAM-Lorentz workshop (no. 1358, 2025), funded by the Centre Européen de Calcul Atomique et Moléculaire and the Lorentz Center, provided an interdisciplinary forum for 29 participants, from practitioners to academics, to share knowledge, ideas, practices, and current results dedicated to advancing green software and AI research. The workshop was held February 3-7, 2025, in Lausanne, Switzerland. Through keynotes, flash talks, and collaborative discussions, participants identified and prioritized key challenges for the field. These included energy assessment and standardization, benchmarking practices, sustainability-aware architectures, runtime adaptation, empirical methodologies, and education. This report presents a research agenda emerging from the workshop, outlining open research directions and practical recommendations to guide the development of environmentally sustainable AI-enabled systems rooted in software engineering principles.
- Europe > Switzerland > Vaud > Lausanne (0.24)
- Europe > Switzerland > Zürich > Zürich (0.14)
- North America > United States > New York > New York County > New York City (0.05)
- (23 more...)
- Research Report (1.00)
- Instructional Material > Course Syllabus & Notes (0.34)
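The workshop report above names energy assessment and benchmarking among the key challenges for green AI engineering. The snippet below is a minimal sketch of one common measurement pattern: wrapping a workload in a software-based energy/emissions tracker. The codecarbon package and the placeholder workload are illustrative assumptions on my part, not tools prescribed by the report.

    # Minimal sketch: estimating the emissions of one AI workload run.
    # Assumes the open-source codecarbon package is installed (pip install codecarbon);
    # neither the package nor the placeholder workload comes from the workshop report.
    from codecarbon import EmissionsTracker


    def placeholder_workload(iterations: int = 2_000_000) -> float:
        """Stand-in for a training or inference step; replace with a real model call."""
        total = 0.0
        for i in range(iterations):
            total += (i % 7) * 0.5
        return total


    if __name__ == "__main__":
        tracker = EmissionsTracker(project_name="greening-ai-demo")
        tracker.start()
        try:
            placeholder_workload()
        finally:
            emissions_kg = tracker.stop()  # estimated kg CO2-equivalent for this run
        print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")

Reporting such per-run figures alongside accuracy metrics is one concrete way the benchmarking practices discussed at the workshop could be made comparable across studies.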
What do people expect from Artificial Intelligence? Public opinion on alignment in AI moderation from Germany and the United States
Jungherr, Andreas, Rauchfleisch, Adrian
Recent advances in generative Artificial Intelligence have raised public awareness, shaping expectations and concerns about their societal implications. Central to these debates is the question of AI alignment -- how well AI systems meet public expectations regarding safety, fairness, and social values. However, little is known about what people expect from AI-enabled systems and how these expectations differ across national contexts. We present evidence from two surveys of public preferences for key functional features of AI-enabled systems in Germany (n = 1800) and the United States (n = 1756). We examine support for four types of alignment in AI moderation: accuracy and reliability, safety, bias mitigation, and the promotion of aspirational imaginaries. U.S. respondents report significantly higher AI use and consistently greater support for all alignment features, reflecting broader technological openness and higher societal involvement with AI. In both countries, accuracy and safety enjoy the strongest support, while more normatively charged goals -- like fairness and aspirational imaginaries -- receive more cautious backing, particularly in Germany. We also explore how individual experience with AI, attitudes toward free speech, political ideology, partisan affiliation, and gender shape these preferences. AI use and free speech support explain more variation in Germany. In contrast, U.S. responses show greater attitudinal uniformity, suggesting that higher exposure to AI may consolidate public expectations. These findings contribute to debates on AI governance and cross-national variation in public preferences. More broadly, our study demonstrates the value of empirically grounding AI alignment debates in public attitudes and of explicitly incorporating normatively grounded expectations into theoretical and policy discussions on the governance of AI-generated content.
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- Asia > Taiwan (0.04)
- North America > United States > California (0.04)
- (9 more...)
- Research Report > Experimental Study (1.00)
- Questionnaire & Opinion Survey (1.00)
- Research Report > New Finding (0.93)
- Law > Civil Rights & Constitutional Law (1.00)
- Government (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Issues > Social & Ethical Issues (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.66)
Human-centred test and evaluation of military AI
Helmer, David, Boardman, Michael, Conroy, S. Kate, Hepworth, Adam J., Harjani, Manoj
The REAIM 2024 Blueprint for Action states that AI applications in the military domain should be ethical and human-centric and that humans must remain responsible and accountable for their use and effects. Developing rigorous test and evaluation, verification and validation (TEVV) frameworks will contribute to robust oversight mechanisms. TEVV in the development and deployment of AI systems needs to involve human users throughout the lifecycle. Traditional human-centred test and evaluation methods from human factors need to be adapted for deployed AI systems that require ongoing monitoring and evaluation. The language around AI-enabled systems should shift to include the human(s) as a component of the system. Standards and requirements supporting this adjusted definition are needed, as are metrics and means to evaluate them. The need for dialogue between technologists and policymakers on human-centred TEVV will be evergreen, but dialogue needs to be initiated with an objective in mind for it to be productive. Development of TEVV throughout the system lifecycle is critical to support this evolution, including the issue of human scalability and its impact on the scale of achievable testing. Communication between technical and non-technical communities must be improved to ensure operators and policymakers understand the risk assumed by system use and to better inform research and development. Test and evaluation in support of responsible AI deployment must include the effect of the human to reflect operationally realised system performance. Means of communicating the results of TEVV to those using and making decisions about AI-based systems will be key to informing risk-based decisions regarding their use.
- Oceania > Australia > Queensland (0.04)
- North America > United States (0.04)
- Asia > South Korea > Seoul > Seoul (0.04)
- Asia > Singapore (0.04)
- Law (1.00)
- Government > Military (1.00)
Cheap drones can take out expensive military systems, warns former Air Force pilot pushing AI-enabled force
AI-enabled military systems have been effective in battle, but some reliability issues still concern troops and their commanders: former Air Force test pilot. Cheap drones equipped with AI can destroy expensive military equipment, and the Pentagon will need to incorporate autonomous technology into its strategy to advance into the next generation of warfare, a former test pilot and military tech company executive told Fox News. "What we've seen in Europe and other theaters is that they've democratized warfare," said EpiSci Vice President of Tactical Autonomous Systems Chris Gentile. "A $1,000 drone can take out a multimillion-dollar asset." The Pentagon has a portfolio of over 800 contracts for AI-enabled projects.
- North America > United States (1.00)
- Asia > China (0.19)
- Europe > Russia (0.05)
- Asia > Russia (0.05)
- Government > Regional Government > North America Government > United States Government (1.00)
- Government > Military > Air Force (1.00)
Unravelling Responsibility for AI
Porter, Zoe, Al-Qaddoumi, Joanna, Conmy, Philippa Ryan, Morgan, Phillip, McDermid, John, Habli, Ibrahim
To reason about where responsibility does and should lie in complex situations involving AI-enabled systems, we first need a sufficiently clear and detailed cross-disciplinary vocabulary for talking about responsibility. Responsibility is a triadic relation involving an actor, an occurrence, and a way of being responsible. As part of a conscious effort towards 'unravelling' the concept of responsibility to support practical reasoning about responsibility for AI, this paper takes the three-part formulation 'Actor A is responsible for Occurrence O' and identifies valid combinations of subcategories of A, 'is responsible for', and O. These valid combinations - which we term "responsibility strings" - are grouped into four senses of responsibility: role-responsibility; causal responsibility; legal liability-responsibility; and moral responsibility. They are illustrated with two running examples, one involving a healthcare AI-based system and the other the fatal collision of an autonomous vehicle (AV) with a pedestrian in Tempe, Arizona in 2018. The output of the paper is 81 responsibility strings. The aim is that these strings provide the vocabulary for people across disciplines to be clear and specific about the different ways that different actors are responsible for different occurrences within a complex event for which responsibility is sought, allowing for precise and targeted interdisciplinary normative deliberations.
- North America > United States > Arizona > Maricopa County > Tempe (0.24)
- Europe > United Kingdom > England > Oxfordshire > Oxford (0.05)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- (11 more...)
- Law > Torts Law (1.00)
- Law > Criminal Law (1.00)
- Law Enforcement & Public Safety > Crime Prevention & Enforcement (1.00)
- (4 more...)
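The abstract above derives 81 "responsibility strings" by combining subcategories of the actor, the sense of being responsible, and the occurrence. The sketch below only illustrates that combinatorial construction; the actor and occurrence lists and the validity filter are hypothetical placeholders, not the paper's taxonomy (only the four senses of responsibility are taken from the abstract).

    # Illustration of building "responsibility strings" as valid combinations of
    # (actor subcategory, sense of responsibility, occurrence subcategory).
    # The actor/occurrence lists and the validity rule are hypothetical placeholders;
    # the paper's own subcategories and constraints yield its 81 strings.
    from itertools import product

    actors = ["developer", "operator", "deploying organisation"]          # placeholder subcategories of A
    senses = ["role-responsible", "causally responsible",                  # the four senses named in the abstract
              "legally liable", "morally responsible"]
    occurrences = ["design decision", "system output", "resulting harm"]   # placeholder subcategories of O

    def is_valid(actor: str, sense: str, occurrence: str) -> bool:
        """Hypothetical filter: not every (A, sense, O) triple is a meaningful string."""
        if sense == "causally responsible" and occurrence == "design decision" and actor == "operator":
            return False  # invented example of an excluded combination
        return True

    responsibility_strings = [
        f"{actor} is {sense} for {occurrence}"
        for actor, sense, occurrence in product(actors, senses, occurrences)
        if is_valid(actor, sense, occurrence)
    ]

    for s in responsibility_strings:
        print(s)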
A Systematic Literature Review of User Trust in AI-Enabled Systems: An HCI Perspective
Bach, Tita Alissa, Khan, Amna, Hallock, Harry, Beltrão, Gabriela, Sousa, Sonia
User trust in Artificial Intelligence (AI)-enabled systems has been increasingly recognized and proven as a key element in fostering adoption. It has been suggested that AI-enabled systems must go beyond technical-centric approaches and towards embracing a more human-centric approach, a core principle of the human-computer interaction (HCI) field. This review aims to provide an overview of the user trust definitions, influencing factors, and measurement methods from 23 empirical studies to gather insight for future technical and design strategies, research, and initiatives to calibrate the user-AI relationship. The findings confirm that there is more than one way to define trust. Selecting the most appropriate trust definition to depict user trust in a specific context should be the focus instead of comparing definitions. User trust in AI-enabled systems is found to be influenced by three main themes, namely socio-ethical considerations, technical and design features, and user characteristics. User characteristics dominate the findings, reinforcing the importance of user involvement from development through to monitoring of AI-enabled systems. In conclusion, user trust needs to be addressed directly in every context where AI-enabled systems are being used or discussed. In addition, calibrating the user-AI relationship requires finding the optimal balance that works for not only the user but also the system.
- Europe > Estonia > Harju County > Tallinn (0.05)
- Europe > Germany (0.04)
- Europe > Netherlands (0.04)
- (7 more...)
- Research Report > New Finding (1.00)
- Overview (1.00)
- Health & Medicine > Therapeutic Area (0.67)
- Information Technology > Security & Privacy (0.46)
Optimizing Traditional Agricultural Practices with AI - Datafloq
As the world changes, millions of people will see their livelihoods affected. The advancement of technology has resulted in productivity gains, income gains, and improvements in well-being throughout the ages. Developing an understanding of emerging technologies, including artificial intelligence, machine learning, and robotics, will allow us to better respond to the need to ensure food security and sustainable livelihoods in the face of rapidly changing conditions. The constant growth of the world's population necessitates technological solutions that provide enough food while remaining sustainable and in harmony with the environment. To increase productivity within the agrifood sector and improve its sustainability, these solutions are required across various sectors of agriculture – crop and livestock production, aquaculture, fishing, and forestry.
Meet the A3 Artificial Intelligence Tech Strategy Board Members
In the first of our series of A3 interviews with AI leaders, John Lizzi, the Executive Leader - Robotics and Autonomous Systems at GE, discusses how to develop AI projects that focus on business objectives. Lizzi, who serves as the chair of the Association for Advancing Automation's Artificial Intelligence Technology Strategy Board, says that AI is enabling intelligent systems to operate in the complex and uncertain world. Check out his advice on how to craft your AI strategy. How would you advise companies to choose their artificial intelligence projects – and what questions do they need to answer before they begin? Win hearts and minds: I think it's important to note that injecting new and disruptive technology into a business is hard no matter what technology you're talking about.
An Eye on AI: How the Human Element Plays a Role in Today's Tech
Artificial intelligence has become an integral part of the day-to-day operations across most industries. And in great part, AI can be credited with condensing vast amounts of data into something more usable. But as companies come under greater public scrutiny for how algorithms are influencing corporate behavior, the question of how to ethically apply artificial intelligence is top of mind for commercial insurance leaders. Ethical use of technology is "not a problem that's exclusive to AI," Anthony Habayeb, founding CEO of Monitaur, an AI governance company, said. "Corporations have their corporate governance and need to have their opinion of what sort of ethics and practices they bring into the market as a company. And those principles should be implemented in their AI, software and practices overall," he explained.
- Law (1.00)
- Government (0.72)
- Banking & Finance > Insurance (0.52)
The 6-Ds of Creating AI-Enabled Systems
We are entering our tenth year of the current Artificial Intelligence (AI) spring, and, as with previous AI hype cycles, the threat of an AI winter looms. Past AI winters occurred because of ineffective approaches to navigating the technology valley of death. The 6-D framework provides an end-to-end approach to navigating this challenge. It starts with problem decomposition to identify potential AI solutions and ends with considerations for deploying AI-enabled systems. Each component of the 6-D framework is described in this paper, along with a precision medicine use case.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > Maryland > Prince George's County > Laurel (0.04)
- North America > United States > Florida > Orange County > Orlando (0.04)